Artificial neural net attractors

Author

  • Julien Clinton Sprott
Abstract

Aesthetically appealing patterns are produced by the dynamical behavior of artificial neural networks with randomly chosen connection strengths. These feed-forward networks have a single hidden layer of neurons and a single output, which is fed back to the input to produce a scalar time series that is always bounded and often chaotic. Sample attractors are shown and simple computer code is provided to encourage experimentation. © 1998 Published by Elsevier Science Ltd. All rights reserved.

Dynamical systems modeled by nonlinear maps and flows can produce an astonishing variety of aesthetically appealing visual forms [1]. One such nonlinear map is an artificial neural network [2]. Neural networks offer a number of advantages as generators of interesting visual patterns:

1. Their outputs are automatically bounded with the proper choice of a squashing function.
2. Neural networks are universal approximators [3] and hence are capable of generating any pattern if they are sufficiently complicated.
3. There is a large literature on the design, training, and behavior of neural networks.
4. Neural networks mimic the operation of the human brain, and hence they are a natural choice for emulating human-generated art.
5. In principle, they can be trained to improve the quality of their art, just as a human can be trained.

There are many possible neural network architectures. The one used here is the feed-forward network shown in Fig. 1. It has an input layer with D elements (y_1, y_2, ..., y_D), a hidden layer of N neurons (x_1, x_2, ..., x_N), and a single output y_0. The network is characterized by the equations x_i = tanh( Σ_{j=1}^{D} a_{ij} y_j ), with the a_{ij} the randomly chosen connection strengths from input j to hidden neuron i.
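The paper's own sample code sits behind the full-text download; as a rough illustration of the loop the abstract describes, the Python sketch below feeds D time-delayed copies of the scalar output through N tanh neurons and sums them back into a new output. The weight scale, the delay-line feedback arrangement, and all parameter values are assumptions chosen for illustration, not details taken from the paper.

```python
import numpy as np

# Sketch of the architecture described above: D time-delayed copies of the
# scalar output feed N tanh neurons, whose weighted sum becomes the new
# output.  Weight scales and the delay-line feedback are assumptions.
D, N, steps = 4, 8, 10000
rng = np.random.default_rng(0)

a = rng.normal(size=(N, D))      # input-to-hidden connection strengths
b = rng.normal(size=N) * 2.0     # hidden-to-output weights (arbitrary scale)
y = rng.normal(size=D) * 0.1     # delay line holding the last D outputs

trajectory = []
for _ in range(steps):
    x = np.tanh(a @ y)           # hidden layer, bounded by the tanh squashing
    y0 = b @ x                   # scalar output
    y = np.roll(y, 1)            # shift the delay line by one step ...
    y[0] = y0                    # ... and feed the output back as the newest input
    trajectory.append(y0)

# trajectory now holds the scalar time series.
```

Plotting successive values of the resulting time series against one another is a simple way to view the bounded, often chaotic attractors the abstract mentions.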

Similar articles

Continuous Attractors in Recurrent Neural Networks and Phase Space Learning

Recurrent networks can be used as associative memories in which the stored memories are fixed points to which the network dynamics converges. These networks, however, can also exhibit continuous attractors, such as limit cycles and chaotic attractors. The use of these attractors in recurrent networks for the construction of associative memories is argued for. Here, we provide a training algori...

Phase synchronization and chaotic dynamics in Hebbian learned artificial recurrent neural networks

All experiments and results reported in this paper should be assessed at the crossroads of two basic lines of research: increasing the storage capacity of recurrent neural networks as much as possible, and observing and studying how this increase affects the dynamical regimes exhibited by the net in order to support such large storage. Seminal observations performed by Skarda and Freeman [...

Learning Cycles brings Chaos in Continuous Hopfield Networks

This paper studies the impact of a Hebbian learning algorithm on the recurrent neural network's underlying dynamics. Two different kinds of learning for encoding information in the attractors of the Hopfield neural net are compared: the storing of static patterns and the storing of cyclic patterns. We show that if the storing of static patterns leads to a reduction of the potent...
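For context, "storing static patterns" in a Hopfield net is classically done with the Hebbian outer-product rule; the Python sketch below shows that textbook construction (pattern count, network size, and the asynchronous update schedule are arbitrary choices, not details from the paper):

```python
import numpy as np

# Textbook Hebbian storage of static +/-1 patterns in a discrete Hopfield net
# (the static-pattern case; the cyclic-pattern case is not sketched here).
rng = np.random.default_rng(1)
n, n_patterns = 64, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

W = (patterns.T @ patterns).astype(float) / n   # Hebbian outer-product rule
np.fill_diagonal(W, 0.0)                        # no self-connections

def recall(state, sweeps=10):
    """Asynchronous sign updates; stored patterns act as fixed-point attractors."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0] * rng.choice([1, 1, 1, -1], size=n)   # flip roughly 25% of bits
print(np.mean(recall(noisy) == patterns[0]))              # typically prints 1.0
```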

The number of direct attractors in discrete state neural networks

The maximum possible number of isotropic direct attractors of...

Neural Network Mechanisms Underlying Stimulus Driven Variability Reduction

It is well established that the variability of neural activity across trials, as measured by the Fano factor, is elevated. This fact places limits on information encoding by neural activity. However, a series of recent neurophysiological experiments has changed this traditional view. Single-cell recordings across a variety of species, brain areas, brain states and stimulus conditions de...
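For reference, the Fano factor mentioned above is the trial-to-trial variance of the spike count divided by its mean (equal to 1 for a Poisson process); a minimal computation on made-up counts looks like this:

```python
import numpy as np

# Fano factor = variance of the spike count across trials / mean spike count.
# A Poisson process gives exactly 1; the counts below are hypothetical.
counts = np.array([7, 12, 9, 15, 8, 11, 10, 13])   # spike counts on 8 trials
fano = counts.var(ddof=1) / counts.mean()
print(f"Fano factor: {fano:.2f}")
```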

Transient hidden chaotic attractors in a Hopfield neural system

In this letter we unveil the existence of transient hidden coexisting chaotic attractors in a simplified Hopfield neural network with three neurons.

Keywords: Hopfield neural network; transient hidden chaotic attractor; limit cycle
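As a generic illustration of the kind of system involved (the letter's actual model and weights are not given here), a continuous three-neuron Hopfield network has the standard form dx_i/dt = -x_i + Σ_j w_ij tanh(x_j) and can be integrated numerically; the weight matrix below is a hypothetical example:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic continuous three-neuron Hopfield system:
#     dx_i/dt = -x_i + sum_j W[i, j] * tanh(x_j)
# W is a hypothetical weight matrix for illustration, not the paper's.
W = np.array([[ 2.0, -1.2,  0.0],
              [ 2.0,  1.7,  1.2],
              [-4.8,  0.0,  1.1]])

def hopfield(t, x):
    return -x + W @ np.tanh(x)

sol = solve_ivp(hopfield, (0.0, 200.0), [0.1, 0.2, 0.3], max_step=0.01)
print(sol.y[:, -1])   # final state; plot sol.y[0] vs sol.y[1] to inspect the trajectory
```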


Journal:
  • Computers & Graphics

Volume 22, Issue

Pages  -

Publication date 1998